Hierarchical Learning


Towards Understanding Hierarchical Learning: Benefits of Neural Representations

Neural Information Processing Systems

Deep neural networks can empirically perform efficient hierarchical learning, in which the layers learn useful representations of the data. However, how they make use of the intermediate representations is not explained by recent theories that relate them to ``shallow learners'' such as kernels. In this work, we demonstrate that intermediate \emph{neural representations} add more flexibility to neural networks and can be advantageous over raw inputs. We consider a fixed, randomly initialized neural network as a representation function fed into another trainable network. When the trainable network is the quadratic Taylor model of a wide two-layer network, we show that the neural representation can achieve improved sample complexity compared with the raw input: for learning a low-rank degree-$p$ polynomial ($p \geq 4$) in $d$ dimensions, the neural representation requires only $\widetilde{O}(d^{\lceil p/2 \rceil})$ samples, while the best-known sample complexity upper bound for the raw input is $\widetilde{O}(d^{p-1})$. We contrast our result with a lower bound showing that neural representations do not improve over the raw input (in the infinite-width limit) when the trainable network is instead a neural tangent kernel. Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
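To make the stated gap concrete, here is the comparison instantiated at the smallest covered degree, $p = 4$ (simple arithmetic on the two bounds quoted in the abstract):

```latex
\widetilde{O}\!\left(d^{\lceil p/2 \rceil}\right)\Big|_{p=4}
  = \widetilde{O}(d^{2})
\qquad \text{versus} \qquad
\widetilde{O}\!\left(d^{p-1}\right)\Big|_{p=4}
  = \widetilde{O}(d^{3})
```

so the neural representation already saves a full factor of $d$ at $p = 4$, and the gap widens as $p$ grows.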


ALMA: Hierarchical Learning for Composite Multi-Agent Tasks

Neural Information Processing Systems

Despite significant progress on multi-agent reinforcement learning (MARL) in recent years, coordination in complex domains remains a challenge. Work in MARL often focuses on solving tasks where agents interact with all other agents and entities in the environment; however, we observe that real-world tasks are often composed of several isolated instances of local agent interactions (subtasks), and each agent can meaningfully focus on one subtask to the exclusion of all else in the environment. In these composite tasks, successful policies can often be decomposed into two levels of decision-making: agents are allocated to specific subtasks and each agent acts productively towards their assigned subtask alone. This decomposed decision-making provides a strong structural inductive bias, significantly reduces agent observation spaces, and encourages subtask-specific policies to be reused and composed during training, as opposed to treating each new composition of subtasks as unique. We introduce ALMA, a general learning method for taking advantage of these structured tasks. ALMA simultaneously learns a high-level subtask allocation policy and low-level agent policies. We demonstrate that ALMA learns sophisticated coordination behavior in a number of challenging environments, outperforming strong baselines. ALMA's modularity also enables it to better generalize to new environment configurations. Finally, we find that while ALMA can integrate separately trained allocation and action policies, the best performance is obtained only by training all components jointly.
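The two-level decomposition described above can be sketched in a few lines. The names (`allocate`, `act`, `step`) and the random stand-in policies are illustrative assumptions for exposition, not ALMA's actual interfaces; ALMA learns both levels.

```python
import random

def allocate(agents, subtasks):
    """High-level policy: assign each agent to one subtask.

    Random stand-in here; ALMA learns this allocation policy."""
    return {agent: random.choice(subtasks) for agent in agents}

def act(agent, subtask_observation):
    """Low-level policy: choose an action from the subtask-local view only.

    Placeholder action; ALMA trains subtask-specific action policies."""
    return ("noop", agent, subtask_observation)

def step(agents, subtasks, observe):
    assignment = allocate(agents, subtasks)           # level 1: who works on what
    return {a: act(a, observe(a, assignment[a]))      # level 2: act within subtask
            for a in agents}

# Toy usage: two agents, two subtasks, trivial observation function that
# just returns the assigned subtask's name.
actions = step(["a1", "a2"], ["s1", "s2"], lambda a, s: s)
```

The key structural point is that `act` only ever sees the subtask-local observation, which is what shrinks each agent's observation space and lets subtask policies be reused across compositions.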


Review for NeurIPS paper: Towards Understanding Hierarchical Learning: Benefits of Neural Representations

Neural Information Processing Systems

Weaknesses: Again, I am not an expert, so my questions are conceptual, and my score should be interpreted as "undecided", but if I'm convinced by your answers or the other reviewers that my concerns are invalid, which is very probable, I'll recommend accepting. Since your paper explicitly wants to investigate the effect of having the neural representation, the additional whitening seems like it adds a second effect into the mix. If you need to do this because of the proofs, then at least it seems like you should also whiten the "raw" representations in Section 3. Would the bounds for these change if you did? How much do the proofs rely on this whitening? How much is it really still a neural representation and not a simple random up-projection of the data?


Hierarchical Learning for Quantum ML: Novel Training Technique for Large-Scale Variational Quantum Circuits

Gharibyan, Hrant, Su, Vincent, Tepanyan, Hayk

arXiv.org Artificial Intelligence

We present hierarchical learning, a novel variational architecture for efficient training of large-scale variational quantum circuits. We test and benchmark our technique on distribution loading with quantum circuit Born machines (QCBMs). With QCBMs, probability distributions are loaded into the squared amplitudes of computational basis vectors represented by bitstrings. Our key insight is to take advantage of the fact that the most significant (qu)bits have a greater effect on the final distribution and can be learned first. One can think of it as a generalization of layerwise learning, where some parameters of the variational circuit are learned first to prevent the phenomenon of barren plateaus. We briefly review adjoint methods for computing the gradient, in particular for loss functions that are not expectation values of observables. We first compare the role of connectivity in the variational ansatz for the task of loading a Gaussian distribution on nine qubits, finding that 2D connectivity greatly outperforms qubits arranged on a line. Based on our observations, we then implement this strategy in large-scale numerical experiments with GPUs, training a QCBM to reproduce a 3-dimensional multivariate Gaussian distribution on 27 qubits up to $\sim4\%$ total variation distance. Though barren plateau arguments do not strictly apply here, because the objective function is not tied to an observable, this is to our knowledge the first practical demonstration of variational learning on large numbers of qubits. We also demonstrate hierarchical learning as a resource-efficient way to load distributions onto existing quantum hardware (IBM's 7- and 27-qubit devices) in tandem with Fire Opal optimizations.
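A purely classical toy can illustrate the ordering insight: the most significant bits shape the coarse structure of the target distribution, so fitting them first already captures most of it. This sketch mimics only the hierarchical decomposition using exact marginals; it is not a variational quantum circuit, and the target distribution below is invented for illustration.

```python
from collections import defaultdict

def msb_marginal(dist, k):
    """Marginal of a bitstring distribution over its k most significant bits."""
    out = defaultdict(float)
    for bits, p in dist.items():
        out[bits[:k]] += p
    return dict(out)

def tv_distance(p, q):
    """Total variation distance between two distributions over bitstrings."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in keys)

# Target over 3-bit strings, skewed toward strings starting with '0'.
target = {"000": 0.4, "001": 0.2, "010": 0.2, "011": 0.1,
          "100": 0.05, "101": 0.05}

# Stage 1: model only the 1-bit MSB marginal; lower bits spread uniformly.
coarse = msb_marginal(target, 1)
stage1 = {b: coarse[b[0]] / 4 for b in (f"{i:03b}" for i in range(8))}

# Stage 2: refine with the 2-bit marginal; lowest bit still uniform.
fine = msb_marginal(target, 2)
stage2 = {b: fine.get(b[:2], 0.0) / 2 for b in (f"{i:03b}" for i in range(8))}

# Each refinement stage moves the model closer to the target in total
# variation, mirroring how later stages of hierarchical learning refine
# the coarse structure fixed by the most significant (qu)bits.
```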


Hierarchical Learning of Dimensional Biases in Human Categorization

Neural Information Processing Systems

Existing models of categorization typically represent to-be-classified items as points in a multidimensional space. While from a mathematical point of view an infinite number of basis sets can be used to represent points in this space, the choice of basis set is psychologically crucial. People generally choose the same basis dimensions, and have a strong preference to generalize along the axes of these dimensions, but not "diagonally". What makes some choices of dimension special? We explore the idea that the dimensions used by people echo the natural variation in the environment. Specifically, we present a rational model that does not assume dimensions, but learns the same type of dimensional generalizations that people display. This bias is shaped by exposing the model to many categories with a structure hypothesized to be like those which children encounter. Our model can be viewed as a type of transformed Dirichlet process mixture model, where it is the learning of the base distribution of the Dirichlet process which allows dimensional generalization. The learning behaviour of our model captures the developmental shift from the roughly "isotropic" generalization of children to the axis-aligned generalization that adults show.


Structural hierarchical learning for energy networks

Leprince, Julien, Khan, Waqas, Madsen, Henrik, Møller, Jan Kloppenborg, Zeiler, Wim

arXiv.org Artificial Intelligence

Many sectors nowadays require accurate and coherent predictions across their organization to operate effectively. Otherwise, decision-makers would be planning using disparate views of the future, resulting in inconsistent decisions across their sectors. To secure coherency across hierarchies, recent research has put forward hierarchical learning, a coherency-informed hierarchical regressor that leverages the power of machine learning through a custom loss function founded on optimal reconciliation methods. While this approach showed promising potential, results exhibited discordant performance, with coherency information improving hierarchical forecasts in only one setting. This work proposes to tackle these obstacles by investigating custom neural network designs inspired by the topological structures of hierarchies. Results reveal that, in a data-limited setting, structural models with fewer connections perform best overall and demonstrate the value of coherency information for both the accuracy and the coherency of forecasts, provided individual forecasts are generated within reasonable accuracy limits. Overall, this work expands and improves hierarchical learning methods thanks to a structurally-scaled learning mechanism extension coupled with tailored network designs, producing a resourceful, data-efficient, and information-rich learning process.
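The kind of coherency-informed loss described above can be sketched as an accuracy term plus a penalty on forecasts that violate the hierarchy's aggregation structure. The summing matrix `S`, the quadratic penalty form, and the weight `lam` below are illustrative assumptions, not the paper's actual reconciliation-based formulation.

```python
def coherency_loss(y_true, y_pred, S, lam=0.1):
    """Mean squared error plus a coherency penalty.

    S maps bottom-level series to every level of the hierarchy; a forecast
    stacked as [aggregates..., bottom...] is coherent when it equals
    S applied to its own bottom-level block."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    n_bottom = len(S[0])
    bottom = y_pred[-n_bottom:]
    agg = [sum(row[j] * bottom[j] for j in range(n_bottom)) for row in S]
    incoherence = sum((p - a) ** 2 for p, a in zip(y_pred, agg)) / n
    return mse + lam * incoherence

# Two buildings and their total: row 0 aggregates, rows 1-2 are the identity.
S = [[1.0, 1.0],
     [1.0, 0.0],
     [0.0, 1.0]]
y_true = [5.0, 2.0, 3.0]
coherent = [5.0, 2.0, 3.0]      # total equals the sum of the buildings
incoherent = [6.0, 2.0, 3.0]    # total is off by one unit -> extra penalty
```

The penalty term is what injects the coherency information into training: an incoherent forecast is punished even when its per-series errors are small.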


Galaxy Image Classification using Hierarchical Data Learning with Weighted Sampling and Label Smoothing

Ma, Xiaohua, Li, Xiangru, Luo, Ali, Zhang, Jinqu, Li, Hui

arXiv.org Artificial Intelligence

With the development of a series of galaxy sky surveys in recent years, the volume of observations has increased rapidly, making machine learning methods for galaxy image recognition a hot research topic. Existing automatic galaxy image recognition research is plagued by large differences in similarity between categories, the imbalance of data between different classes, and the discrepancy between the discrete representation of galaxy classes and the essentially gradual changes from one morphological class to the adjacent class (DDRGC). These limitations have motivated several astronomers and machine learning experts to design projects with improved galaxy image recognition capabilities. Therefore, this paper proposes a novel learning method, "Hierarchical Imbalanced data learning with Weighted sampling and Label smoothing" (HIWL). HIWL consists of three key techniques, each dealing with one of the above-mentioned problems: (1) a hierarchical galaxy classification model based on an efficient backbone network; (2) a weighted sampling scheme to deal with the imbalance problem; (3) a label smoothing technique to alleviate the DDRGC problem. We applied this method to galaxy photometric images from the Galaxy Zoo - The Galaxy Challenge, exploring the recognition of completely round smooth, in-between smooth, cigar-shaped, edge-on, and spiral galaxies. The overall classification accuracy is 96.32%, and the advantages of HIWL are shown in terms of recall, precision, and F1-score in comparison with related works. In addition, we explored visualizations of the galaxy image features and model attention to understand the foundations of the proposed scheme.
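Two of the three techniques named above, weighted sampling and label smoothing, are standard and easy to illustrate. The function names and the inverse-frequency weighting below are illustrative assumptions, not the HIWL implementation.

```python
from collections import Counter

def smoothed_targets(label, n_classes, eps=0.1):
    """Label smoothing: move eps of the probability mass off the true class,
    softening the hard one-hot target that ignores the gradual morphological
    transitions between adjacent galaxy classes (the DDRGC problem)."""
    off = eps / (n_classes - 1)
    return [1.0 - eps if c == label else off for c in range(n_classes)]

def sampling_weights(labels):
    """Weighted sampling: inverse-frequency weights so rare classes
    (e.g. cigar-shaped) are drawn as often as common ones (e.g. spiral)."""
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

# Toy usage: 5 galaxy classes, a class-imbalanced label list.
target = smoothed_targets(label=4, n_classes=5, eps=0.1)
weights = sampling_weights([4, 4, 4, 0, 2])   # three of one class, two rare
```

With `eps=0.1` and 5 classes, the true class keeps probability 0.9 and each other class receives 0.025, so the target still sums to 1 while acknowledging that neighboring morphological classes are plausible.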